Cache Sharing in QImPrESS Performance Models

Author

  • V. Babka
Abstract

The aim of the Q-ImPrESS project [Q-ImPrESS, 2009] is to develop methods and tools for predicting the impact of changes in existing software systems on quality attributes such as performance, reliability and maintainability. The methods are centered around creating a model of the software system, performing the intended changes in the model first, transforming the model into a prediction model, and solving the latter to obtain the desired quality estimates. Our goal is to extend the performance prediction models used within Q-ImPrESS to include the effects of implicit resource sharing, which are mostly neglected yet often have a significant impact on performance. This paper summarizes the current state of our work on modeling processor cache sharing, highlights the effects we observed in past experiments, and outlines the upcoming work of incorporating some of these effects into suitable Q-ImPrESS performance models.
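As a rough illustration of the kind of adjustment such an extension introduces into a performance model, the sketch below inflates a component's CPU service demand by a cache-sharing slowdown factor. The linear penalty model, the sensitivity parameter and the function names are illustrative assumptions, not the actual Q-ImPrESS prediction model.

#include <stdio.h>

/* Hypothetical sketch, not the Q-ImPrESS model: inflate a component's CPU
 * service demand, measured in isolation, by a slowdown factor that grows
 * with the fraction of the shared cache lost to co-located components.
 * The linear penalty and the sensitivity parameter are illustrative. */
static double shared_cache_slowdown(double isolated_demand_ms,
                                    double cache_share_lost, /* 0.0 .. 1.0 */
                                    double sensitivity)      /* workload-specific */
{
    if (cache_share_lost < 0.0) cache_share_lost = 0.0;
    if (cache_share_lost > 1.0) cache_share_lost = 1.0;
    return isolated_demand_ms * (1.0 + sensitivity * cache_share_lost);
}

int main(void)
{
    /* A component that takes 10 ms in isolation and loses 60 % of the
     * shared cache to its neighbours, assuming a sensitivity of 0.5. */
    printf("adjusted demand: %.2f ms\n", shared_cache_slowdown(10.0, 0.6, 0.5));
    return 0;
}

In a real prediction workflow the sensitivity and the lost cache share would be estimated from measurements or derived from the composed model rather than hard-coded as above.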

Similar articles

Spatial and Temporal Cache Sharing Analysis in Tasks

Understanding the performance of large-scale multicore systems is crucial for achieving faster execution times and optimizing workload efficiency, but it is becoming harder due to the increasing complexity of hardware architectures. Cache sharing is a key component of performance in modern architectures, and it has been the focus of performance analysis tools and techniques in recent years. At the same...

Effect of Data Sharing on Private Cache Design in Chip Multiprocessors

In multithreaded applications with a high degree of data sharing, the miss rate of a private cache is shown to exhibit a compulsory miss component. It manifests because at least some of the shared data originates from other cores and can only be accessed in a shared cache. The compulsory component does not change with the private cache size, causing its miss rate to diminish more slowly as the cache siz...
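A minimal numeric sketch of this argument (the constants and the shape of the capacity-miss curve below are invented for illustration and are not taken from the paper): if the total miss rate is a size-dependent capacity term plus a size-independent compulsory term contributed by shared data, enlarging the private cache only shrinks the first term, so the miss rate flattens out at the compulsory floor.

#include <stdio.h>

/* Illustrative miss-rate model: a capacity component that shrinks with
 * cache size plus a constant compulsory component caused by shared data
 * that must first be fetched from another core.  Constants are invented. */
static double private_miss_rate(double cache_kb, double compulsory_component)
{
    double capacity_component = 0.20 * (64.0 / cache_kb);
    return capacity_component + compulsory_component;
}

int main(void)
{
    /* Doubling the cache pushes the total miss rate towards the constant
     * compulsory floor (0.05 here) instead of towards zero. */
    for (double kb = 64.0; kb <= 1024.0; kb *= 2.0)
        printf("%6.0f KB: miss rate %.3f\n", kb, private_miss_rate(kb, 0.05));
    return 0;
}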

ParaWeaver: Performance Evaluation on Programming Models for Fine Grained Threads

There is a trend towards multicore or manycore processors in computer architecture design. In addition, several parallel programming models have been introduced. Some extract concurrent threads implicitly whenever possible, resulting in fine-grained threads. Others construct threads by explicit user specification in the program, resulting in coarse-grained threads. How these two mechanisms imp...

Reconciling Sharing and Spatial Locality Using Adjustable Block Size Coherent Caches

Several studies have shown that the performance of coherent caches depends on the relationship between the cache block size and the granularity of sharing and locality exhibited by the program. Large cache blocks exploit processor and spatial locality, but may cause unnecessary cache invalidations due to false sharing. Small cache blocks can reduce the number of cache invalidations, but increas...
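The invalidation side of this trade-off can be reproduced with a small experiment; the sketch below is illustrative and not taken from the paper, and it assumes a 64-byte cache block, POSIX threads and a C11 compiler. Two threads update logically independent counters, first placed in the same cache block (false sharing) and then padded so that each counter occupies its own block.

#include <pthread.h>
#include <stdint.h>
#include <stdio.h>
#include <time.h>

#define ITERS 50000000UL
#define BLOCK 64          /* assumed cache block size in bytes */

/* Each counter padded to occupy a full cache block on its own. */
struct padded {
    volatile uint64_t value;
    char pad[BLOCK - sizeof(uint64_t)];
};

static _Alignas(BLOCK) volatile uint64_t packed[2];   /* same cache block  */
static _Alignas(BLOCK) struct padded separate[2];     /* one block each    */

/* Increment one counter ITERS times; volatile keeps every store visible. */
static void *bump(void *arg)
{
    volatile uint64_t *c = arg;
    for (uint64_t i = 0; i < ITERS; i++)
        (*c)++;
    return NULL;
}

/* Run two threads on counters a and b and return the elapsed wall time. */
static double run_pair(volatile uint64_t *a, volatile uint64_t *b)
{
    pthread_t t[2];
    struct timespec start, end;

    clock_gettime(CLOCK_MONOTONIC, &start);
    pthread_create(&t[0], NULL, bump, (void *)a);
    pthread_create(&t[1], NULL, bump, (void *)b);
    pthread_join(t[0], NULL);
    pthread_join(t[1], NULL);
    clock_gettime(CLOCK_MONOTONIC, &end);

    return (end.tv_sec - start.tv_sec) + (end.tv_nsec - start.tv_nsec) / 1e9;
}

int main(void)
{
    /* Build e.g. with: cc -O2 -pthread false_sharing.c */
    printf("same block (false sharing): %.2f s\n",
           run_pair(&packed[0], &packed[1]));
    printf("separate blocks (padded):   %.2f s\n",
           run_pair(&separate[0].value, &separate[1].value));
    return 0;
}

On a typical multicore machine the first, falsely shared layout runs noticeably slower because every write invalidates the other core's copy of the block, which is exactly the unnecessary invalidation traffic that larger blocks risk.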

An Analysis of Cache Sharing in Chip Multiprocessors

We present the effects of L1 and L2 cache sharing on cache miss rates, cache line invalidations, and constructive and destructive interference. The most important finding of this paper is that a system configuration that shares L2 caches, does not share L1 caches, and does not enforce inclusion between the L1 and L2 caches will produce the highest-performance cache and communication hierarchy fo...

Publication date: 2010